9 research outputs found

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements, through cooperative control, learning, and adaptation. While both of the component areas, i.e., Robotics and WSN, are very well-known and well-explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields that are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related works, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
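    To make the robotic-router example concrete, here is a minimal sketch (not drawn from any specific paper in the survey) of the geometric core of the idea: given a communication range, position the fewest relay robots on the straight line between sender and receiver so that every hop stays within range. Real RWSN controllers would add motion planning and channel-aware adaptation; the function name and interface here are illustrative assumptions.

    ```python
    import math

    def place_routers(sender, receiver, comm_range):
        """Place the minimum number of robotic routers on the straight line
        between sender and receiver so every hop is within comm_range.

        sender, receiver: (x, y) tuples; comm_range: max single-hop distance.
        Returns a list of (x, y) router positions (interior points only).
        """
        dx = receiver[0] - sender[0]
        dy = receiver[1] - sender[1]
        dist = math.hypot(dx, dy)
        # Fewest hops such that each hop length dist/hops <= comm_range.
        hops = max(1, math.ceil(dist / comm_range))
        routers = []
        for i in range(1, hops):  # skip the endpoints themselves
            t = i / hops
            routers.append((sender[0] + t * dx, sender[1] + t * dy))
        return routers

    # A 10 m gap with a 3 m radio range needs 4 hops, i.e. 3 relay robots.
    chain = place_routers((0.0, 0.0), (10.0, 0.0), comm_range=3.0)
    ```

    The same placement logic extends to moving endpoints by recomputing the chain each control step, which is where the cooperative-control and adaptation themes of the survey come in.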

    Robotic visual servoing and robotic assembly tasks


    The Use of Computer Vision in Monitoring Weaving Sections


    Behavior Acquisition via Vision-Based Robot Learning

    We introduce our approach that makes a robot learn to behave adequately to accomplish a given task at hand through interactions with its environment, with little a priori knowledge about the environment or the robot itself. We briefly present three research topics of vision-based robot learning, in each of which visual perception is tightly coupled with actuator effects so as to learn an adequate behavior. First, a method of vision-based reinforcement learning by which a robot learns to shoot a ball into a goal is presented. Next, "motion sketch" for a one-eyed mobile robot to learn several behaviors such as obstacle avoidance and target pursuit is introduced. Finally, we show a method of purposive visual control consisting of an on-line estimator and a feedback/feedforward controller for uncalibrated camera-manipulator systems. All topics include real robot experiments.
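    The vision-based reinforcement learning mentioned above is typically built on tabular Q-learning over discretized image states (e.g., quantized ball and goal positions in the camera frame). As a toy stand-in, the sketch below learns to center a target in a one-dimensional discretized image; the state space, dynamics, and reward here are illustrative assumptions, not the paper's actual setup.

    ```python
    import random

    def q_learning(states, actions, step, reward, episodes=500,
                   alpha=0.1, gamma=0.9, eps=0.2):
        """Tabular Q-learning: states/actions are finite, step(s, a) gives the
        next state, reward(s2) scores it. Returns the learned Q-table."""
        Q = {(s, a): 0.0 for s in states for a in actions}
        for _ in range(episodes):
            s = random.choice(states)
            for _ in range(50):
                # epsilon-greedy action selection
                if random.random() < eps:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda a: Q[(s, a)])
                s2 = step(s, a)
                r = reward(s2)
                best = max(Q[(s2, b)] for b in actions)
                Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
                s = s2
                if r > 0:  # goal reached, end the episode
                    break
        return Q

    # Toy task: the state is the target's horizontal offset in the image
    # (-2..2); actions pan left, stay, or pan right; reward for centering.
    random.seed(0)
    states = list(range(-2, 3))
    actions = (-1, 0, 1)
    step = lambda s, a: max(-2, min(2, s + a))
    reward = lambda s: 1.0 if s == 0 else 0.0
    Q = q_learning(states, actions, step, reward)
    policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
    ```

    After training, the greedy policy pans toward the center from either side, which is the discrete analogue of the shoot-a-ball behavior acquired from raw visual state in the paper.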